---
title: "HCBS Quality Sample"
output: 
  flexdashboard::flex_dashboard:
    storyboard: true
    theme: yeti
    source: embed
---

```{r setup, include=FALSE}
library(tidyverse); library(flexdashboard); library(plotly)
hcbs_measure_groups <- 
  read_csv("../data/mapping_questions.csv") %>%
  mutate(
    field_desc = str_trim(field_desc),
    field_desc = str_replace(field_desc," who do not have disabilities\\?$","?"),
    sub_domain = recode(
      sub_domain,
      `Choice of provider` = "Choice of provider and services",
      `Choice of services` = "Choice of provider and services"
    )
  ) %>%
  # Exclude questions removed from future surveys (to allow comparison over time)
  filter(
    !participant_field %in% c(
      "Q40","Q79","Q62","Q64","Q66","Q70","Q50","Q53",
      "Q112","Q22","Q89","Q90","Q93","Q100"
    )
  )

hcbs_df <- 
  feather::read_feather("../data/hcbs_long.feather") %>%
  # Join to subset of domain fields
  inner_join(
    hcbs_measure_groups %>%
    select(
      field = participant_field,
      measure_group  = domain,
      measure_domain = sub_domain
    ), 
    by = "field") %>%
  select(-wsa_id) %>%
  # Exclude provider responses not mapped to a participant response
  filter(!is.na(response_id))
```
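
The considerations discussed in the panels below note that the composite uses an *unequal* weighting strategy because response volumes differ across questions. As a quick sketch of the difference (hypothetical counts, not survey data), the two approaches can diverge substantially when one question has far fewer responses:

```r
# Hypothetical two-question composite: equal vs. unequal weighting.
# Equal weighting averages the per-question percentages; unequal weighting
# pools all responses, so low-volume questions carry less influence.
q <- data.frame(
  question = c("Q1", "Q2"),
  yes      = c(90, 5),    # positive ("Yes") responses
  total    = c(100, 10)   # valid ("Yes"/"No") responses
)

equal_wt   <- round(mean(q$yes / q$total) * 100, digits = 1)      # average of per-question %s
unequal_wt <- round(sum(q$yes) / sum(q$total) * 100, digits = 1)  # pooled responses
```

With these made-up numbers the equal-weighted score is 70% (the 10-response question counts as much as the 100-response one), while the pooled score is 86.4%, pulled toward the high-volume question.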

```{r measure_df}

measure_df <-
  hcbs_df %>%
  group_by(measure_group, measure_domain, provider_type, field, field_desc, response) %>%
  summarize(
    n = n_distinct(response_id)
  ) %>%
  group_by(measure_domain, field, field_desc) %>%
  mutate(
    pct = round(n / sum(n) * 100, digits = 1)
  ) %>%
  filter(!is.na(measure_domain))

```
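
The panel code below derives both a composite (weighted) score and consistency ("top box" / "bottom box") scores from the survey data. The same logic can be sketched on a toy data set (hypothetical responses: three respondents each answering three questions):

```r
library(dplyr)

# Hypothetical responses: A is consistently positive, B mixed, C consistently negative.
toy <- data.frame(
  response_id = rep(c("A", "B", "C"), each = 3),
  response    = c("Yes", "Yes", "Yes",  "Yes", "No", "Yes",  "No", "No", "No")
)

toy_scores <- toy %>%
  mutate(pos = response == "Yes") %>%
  group_by(response_id) %>%
  summarize(valid = n(), pos = sum(pos)) %>%           # per-respondent tallies
  summarize(
    composite_pct = round(sum(pos) / sum(valid) * 100, 1),  # positive / all valid answers
    topbox_pct    = round(mean(pos == valid) * 100, 1),     # consistently positive surveys
    bottombox_pct = round(mean(pos == 0) * 100, 1)          # consistently negative surveys
  )
```

Respondent A counts toward the top-box score, C toward the bottom-box score, and B is "mixed"; the composite pools all nine answers (5 of 9 positive, about 55.6%).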

### Can I choose what I do everyday? A sample measure using personal survey feedback {data-commentary-width=400}

```{r make_measure_df}
activity_df <-
  measure_df %>%
  filter(measure_domain == "Choice in daily activities")

activity_detail <-
  hcbs_df %>%
  filter(measure_domain == "Choice in daily activities") %>%
  filter(response %in% c("Yes", "No")) %>%
  select(response_id, measure_domain, field, field_desc, response) %>%
  mutate(response_log = response == "Yes") %>%
  group_by(response_id, measure_domain) %>%
  summarize(
    valid_responses = n(),
    pos_responses   = sum(response_log)
  ) %>%
  mutate(
    top_box    = pos_responses == valid_responses,
    bottom_box = pos_responses == 0,
    middle_box = !top_box & !bottom_box
  ) %>%
  group_by(measure_domain) %>%
  summarize(
    total_answers     = sum(valid_responses),
    positive_answers  = sum(pos_responses),
    total_surveys     = n_distinct(response_id),
    topbox_surveys    = sum(top_box),
    bottombox_surveys = sum(bottom_box),
    middlebox_surveys = sum(middle_box)
  ) %>%
  mutate(
    weighted_pct    = round(positive_answers / total_answers * 100, digits = 1),
    topbox_score    = round(topbox_surveys / total_surveys * 100, digits = 1),
    bottombox_score = round(bottombox_surveys / total_surveys * 100, digits = 1),
    middlebox_score = round(middlebox_surveys / total_surveys * 100, digits = 1)
  )
```

**Overview**

Two summary scores were derived from the survey's *Choice in daily activities* set of questions:

- *Composite Score*: The composite score is the percentage of questions that received a positive response, indicating the presence of choice in daily activities, out of all questions that received either a positive or a negative response.

- *Consistent Performance*: If our goal is that people's experience of HCBS settings be consistently positive, then it is important to look at an individual's experience across all of the questions that address a particular topic. These scores measure how often people's experience was *consistently positive* (i.e. they responded 'Yes' to every question that made up the measure) or *consistently negative* (i.e. they responded 'No' to every question that made up the measure).

**Composite Score**

In the area of `r activity_detail$measure_domain[1]` the composite quality score was `r activity_detail$weighted_pct[1]`%. This means that out of all of the times people answered questions related to `r activity_detail$measure_domain[1]` (n = `r activity_detail$total_answers[1]`), their responses indicated a positive experience in `r activity_detail$positive_answers[1]` instances.

**Consistent Performance**

The graph below shows the consistency of personal experience across the questions related to choice in daily activities:

```{r}
activity_detail %>%
  select(
    measure_domain,
    starts_with("topbox"), starts_with("middlebox"), starts_with("bottombox"),
    total_surveys
  ) %>%
  gather(score_type, score, -measure_domain, -ends_with("surveys")) %>%
  mutate(
    score_type = recode(
      score_type,
      `topbox_score`    = "consistently positive",
      `middlebox_score` = "mixed",
      `bottombox_score` = "consistently negative"
    ),
    score_type = fct_relevel(
      score_type,
      "consistently positive", "mixed", "consistently negative"
    )
  ) %>%
  plot_ly(
    x = ~score,
    y = ~measure_domain,
    color = ~score_type,
    colors = c("#3B9AB2", "#78B7C5", "#EBCC2A", "#E1AF00", "#F21A00"),
    orientation = 'h',
    width = 800,
    height = 300
  ) %>%
  add_bars(
    hoverinfo = 'text',
    text = ~paste0(
      score, "% of respondents had<br>",
      score_type, " experiences<br>",
      "related to ", measure_domain
    )
  ) %>%
  layout(
    showlegend = FALSE,
    xaxis = list(title = "% of survey respondents"),
    yaxis = list(title = "", tickfont = list(size = 10)),
    autosize = F,
    margin = list(l = 150, r = 50, b = 90, t = 40, pad = 4),
    barmode = 'stack'
  )
```

***

To explore how survey responses might be used to understand personal experiences within HCBS settings, we began by developing a measure for a single topic: "Do people have the ability to make choices in their daily activities?" This topic was selected because *Choice and decision making* was one of several areas prioritized by the Implementation Advisory Group (IAG) for inclusion in a definition of quality.

**What is a composite measure?**

Quality reporting can generate a large number of individual measures. To reduce the number of measures that stakeholders need to interpret, related survey questions can be combined into groups of conceptually similar items. Measures made by combining these questions are called *composite measures*.

### What does this include? Specific survey questions making up this measure {data-commentary-width=400}

```{r question_barchart}
activity_df %>%
  filter(response %in% c("Yes", "No")) %>%
  mutate(response = fct_relevel(response, "Yes", "No")) %>%
  # Recalculate % with excluded responses removed
  group_by(measure_domain, field, field_desc) %>%
  mutate(
    pct = round(n / sum(n) * 100, digits = 1)
  ) %>%
  plot_ly(
    x = ~pct,
    y = ~str_wrap(field_desc, 40),
    color = ~response,
    colors = c("#3B9AB2", "#78B7C5", "#EBCC2A", "#E1AF00", "#F21A00"),
    orientation = 'h'
  ) %>%
  add_bars(
    hoverinfo = 'text',
    text = ~paste0(
      "When asked the question<br>",
      "<i>", str_wrap(field_desc, 40), "</i><br>",
      n, " respondents (", pct, "%) replied<br>",
      "<i>", response, "</i>"
    )
  ) %>%
  layout(
    xaxis = list(title = "% of survey respondents"),
    yaxis = list(title = "", tickfont = list(size = 10)),
    margin = list(l = 250, r = 50, b = 60, t = 50, pad = 4),
    barmode = 'stack'
  )
```

***

This panel shows which specific questions make up the sample measure and how survey respondents answered each of them.

**Considerations**

It is important to understand the following details when interpreting this measure:

- *Based on participant responses*: This measure is based on responses from people who receive services, not on responses from the providers who answered similar survey questions.

- *Approach to weighting responses*: There are two basic approaches to weighting composite survey responses: equal or unequal. Equal weighting treats all questions in the composite as equally important, even though some items may be answered more frequently than others. Unequal weighting discounts items with a lower volume of responses, yielding a composite that is more statistically precise. Since response rates differ across the HCBS survey questions, we use an unequal weighting strategy.

- *Missing responses are excluded*: Because answering was voluntary, not all respondents provided answers to every question, so the total number of responses can vary by question.

- *Some other response options are filtered out*: Most questions on the survey ask for *Yes* or *No* responses. Some questions, however, offer additional options such as *I don't know* or *Does not apply*. A table summarizing excluded responses is shown below:

```{r}
activity_df %>%
  ungroup() %>%
  filter(!response %in% c("Yes", "No")) %>%
  select(field_desc, response, n, pct) %>%
  mutate(response = ifelse(is.na(response), "Missing", response)) %>%
  knitr::kable(
    col.names = c("Question", "Response", "n", "%")
  )
```

### What other questions are on the survey? Available survey questions and domains {data-commentary-width=400}

```{r ref_tbl}
library(DT)
hcbs_measure_groups %>%
  select(domain, sub_domain, field_desc, participant_field) %>%
  mutate_at(vars(domain, sub_domain), as.factor) %>%
  DT::datatable(
    rownames = FALSE,
    filter = "top",
    colnames = c(
      'QoL Domain',
      'Composite Measure',
      'Question Text',
      'Question ID'
    ),
    extensions = c('Responsive')
  )
```

***

**Other Questions on the Survey**

The sample measure discussed in the previous panels uses only a few of the many questions available on the HCBS Participant Survey. The table to the right shows how each question from that survey has been mapped to a Quality of Life (QoL) domain, and which clusters of questions have been grouped together for consideration as potential *Composite Measures*. If you use the *Composite Measure* filter to select a single option, the table will display all of the questions flagged as related to that characteristic of quality.

**Person-Reported Outcomes**

In quality measurement, participant surveys such as the MDHHS and MI-DDI HCBS Survey are referred to as [*person-reported outcome*](https://www.qualityforum.org/Projects/n-r/Patient-Reported_Outcomes/Patient-Reported_Outcomes.aspx) or [*experience of care*](http://blog.ncqa.org/the-q-series-what-are-the-types-of-quality-measures/) measures. According to [AHRQ](https://www.ahrq.gov/cahps/about-cahps/patient-experience/index.html), these measures are intended to tell us "*whether something that should happen...actually happened or how often it happened*": a question which the person receiving services and supports is uniquely qualified to answer.